NC State
'Brainless' robot can navigate complex obstacles
Researchers who created a soft robot that could navigate simple mazes without human or computer direction have now built on that work, creating a "brainless" soft robot that can navigate more complex and dynamic environments. "In our earlier work, we demonstrated that our soft robot was able to twist and turn its way through a very simple obstacle course," says Jie Yin, co-corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at North Carolina State University. "However, it was unable to turn unless it encountered an obstacle. In practical terms, this meant that the robot could sometimes get stuck, bouncing back and forth between parallel obstacles. We've developed a new soft robot that is capable of turning on its own, allowing it to make its way through twisty mazes, even negotiating its way around moving obstacles."
'Butterfly bot' is fastest swimming soft robot yet
"To date, swimming soft robots have not been able to swim faster than one body length per second, but marine animals -- such as manta rays -- are able to swim much faster, and much more efficiently," says Jie Yin, corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at NC State. "We wanted to draw on the biomechanics of these animals to see if we could develop faster, more energy-efficient soft robots. The prototypes we've developed work exceptionally well." The researchers developed two types of butterfly bots. One was built specifically for speed, and was able to reach average speeds of 3.74 body lengths per second.
Technique Improves AI Ability to Understand 3D Space Using 2D Images
The work would help the artificial intelligence used in autonomous vehicles navigate in relation to other vehicles, using the two-dimensional images it receives from an onboard camera. A technique developed by researchers at North Carolina State University (NC State) uses two-dimensional (2D) images to improve the ability of artificial intelligence (AI) programs to identify three-dimensional (3D) objects. Called MonoCon, the technique could improve the navigation of autonomous vehicles in relation to other vehicles using 2D images from onboard cameras, which are less expensive than LiDAR sensors. MonoCon can put 3D objects identified in 2D images into a "bounding box," which indicates to the AI the outermost edges of the objects. Said NC State's Tianfu Wu, "In addition to asking the AI to predict the camera-to-object distance and the dimensions of the bounding boxes, we also ask the AI to predict the locations of each of the box's eight points and its distance from the center of the bounding box in two dimensions," which "helps the AI more accurately identify and predict 3D objects based on 2D images."
- North America > United States > North Carolina (0.29)
- North America > United States > District of Columbia > Washington (0.09)
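To make the quoted idea concrete, here is a minimal sketch (not the MonoCon codebase; all values and function names are hypothetical) of how the auxiliary targets described above could be constructed: the eight corners of a 3D bounding box are projected into the 2D image, and their offsets from the projected box center become extra quantities for the network to predict.

```python
# Minimal sketch: build auxiliary 2D targets for monocular 3D detection --
# the projected locations of a 3D box's eight corners and their offsets
# from the projected box center. Not the authors' code; values are examples.
import numpy as np

def box_corners_3d(center, dims, yaw):
    """Eight corners of a 3D box in camera coordinates.

    center: (x, y, z) of the box center; dims: (h, w, l); yaw: rotation about Y.
    """
    h, w, l = dims
    # Corner offsets in the box's local frame.
    x = np.array([ l,  l, -l, -l,  l,  l, -l, -l]) / 2
    y = np.array([ h,  h,  h,  h, -h, -h, -h, -h]) / 2
    z = np.array([ w, -w, -w,  w,  w, -w, -w,  w]) / 2
    rot = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                    [ 0,           1, 0          ],
                    [-np.sin(yaw), 0, np.cos(yaw)]])
    return (rot @ np.vstack([x, y, z])).T + np.asarray(center)   # (8, 3)

def project_to_image(pts_3d, K):
    """Pinhole projection of Nx3 camera-frame points with intrinsics K (3x3)."""
    uvw = (K @ pts_3d.T).T
    return uvw[:, :2] / uvw[:, 2:3]                              # (N, 2) pixels

# Hypothetical camera intrinsics and box parameters, only to show the shapes.
K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0,   0.0,   1.0]])
center, dims, yaw = (1.8, 1.5, 20.0), (1.5, 1.6, 3.9), 0.3

corners_2d = project_to_image(box_corners_3d(center, dims, yaw), K)  # 8 keypoints
center_2d = project_to_image(np.asarray([center]), K)[0]
corner_offsets = corners_2d - center_2d   # (8, 2) auxiliary regression targets
```

The idea, as the quote describes it, is that predicting these extra 2D quantities gives the network additional supervision alongside the main camera-to-object distance and bounding-box dimension predictions.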
Researchers Fine-Tune Control Over AI Image Generation
The new artificial intelligence method enables the system to create and retain a background image, while also creating figures that are consistent from picture to picture, but which show change or movement. Refined control over artificial intelligence (AI)-driven conditional image generation developed by North Carolina State University (NC State) researchers has potential for use in fields ranging from autonomous robotics to AI training. NC State's Tianfu Wu said, "Like previous approaches, ours allows users to have the system generate an image based on a specific set of conditions. But ours also allows you to retain that image and add to it." The approach can also keep specific elements identifiably the same from image to image, while shifting their position or otherwise altering them.
- Information Technology > Sensing and Signal Processing > Image Processing (0.71)
- Information Technology > Artificial Intelligence > Vision (0.71)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.71)
- Information Technology > Artificial Intelligence > Robots (0.63)
NCSU researchers report breakthrough in creating images through artificial intelligence
RALEIGH – Researchers from North Carolina State University have developed a new state-of-the-art method for controlling how artificial intelligence (AI) systems create images. The work has applications for fields from autonomous robotics to AI training. At issue is a type of AI task called conditional image generation, in which AI systems create images that meet a specific set of conditions. For example, a system could be trained to create original images of cats or dogs, depending on which animal the user requested. More recent techniques have built on this to incorporate conditions regarding an image layout.
- Information Technology > Artificial Intelligence > Robots (0.53)
- Information Technology > Artificial Intelligence > Vision (0.37)
- North America > Canada > Quebec > Montreal (0.15)
- North America > United States > North Carolina (0.06)
- North America > United States > Pennsylvania (0.05)
- (9 more...)
- Health & Medicine (0.96)
- Government > Regional Government > North America Government > United States Government (0.96)
- Education (0.96)
- Information Technology (0.96)
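The conditional image generation task described above can be illustrated with a toy example. The sketch below is a generic class-conditional generator, not the NC State method; the network sizes, label names, and variable names are all assumptions chosen for brevity.

```python
# Generic sketch of conditional image generation: a toy generator maps a noise
# vector plus a requested class label ("cat" vs. "dog") to an image, so the
# user's condition steers what gets generated. Not the NC State method.
import torch
import torch.nn as nn

NUM_CLASSES, LATENT_DIM, IMG_SIZE = 2, 64, 32   # hypothetical sizes

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_embed = nn.Embedding(NUM_CLASSES, 16)  # learned label code
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + 16, 256),
            nn.ReLU(),
            nn.Linear(256, IMG_SIZE * IMG_SIZE * 3),
            nn.Tanh(),                                     # pixels in [-1, 1]
        )

    def forward(self, z, labels):
        # Concatenate noise with the label embedding: the label is the
        # "condition" that determines whether a cat or a dog is generated.
        cond = torch.cat([z, self.label_embed(labels)], dim=1)
        return self.net(cond).view(-1, 3, IMG_SIZE, IMG_SIZE)

gen = ConditionalGenerator()
z = torch.randn(4, LATENT_DIM)
labels = torch.tensor([0, 0, 1, 1])        # e.g., 0 = "cat", 1 = "dog"
fake_images = gen(z, labels)               # (4, 3, 32, 32)
```

The layout-conditioned techniques the article mentions extend the same idea by feeding a spatial layout (such as bounding boxes or a segmentation map) as the condition instead of, or in addition to, a single class label.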
New Data Processing Module Makes Deep Neural Networks Smarter
Artificial intelligence researchers at North Carolina State University have improved the performance of deep neural networks by combining feature normalization and feature attention modules into a single module that they call attentive normalization (AN). The hybrid module improves the accuracy of the system significantly, while using negligible extra computational power. "Feature normalization is a crucial element of training deep neural networks, and feature attention is equally important for helping networks highlight which features learned from raw data are most important for accomplishing a given task," says Tianfu Wu, corresponding author of a paper on the work and an assistant professor of electrical and computer engineering at NC State. "But they have mostly been treated separately. We found that combining them made them more efficient and effective."
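A minimal sketch of the general idea described above, fusing feature normalization with feature attention in one module, is shown below. This is an illustrative reconstruction, not the authors' released code; the module name, mixture count, and layer sizes are assumptions.

```python
# Sketch of an "attentive normalization"-style layer: standardize the features,
# then re-scale them with a mixture of learned affine transforms whose mixing
# weights are predicted per input by a small attention head.
import torch
import torch.nn as nn

class AttentiveNorm2d(nn.Module):
    def __init__(self, channels, num_mixtures=5):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)   # standardize only
        # K candidate scale/shift vectors instead of a single learned pair.
        self.gamma = nn.Parameter(torch.ones(num_mixtures, channels))
        self.beta = nn.Parameter(torch.zeros(num_mixtures, channels))
        # Tiny attention head: per-example weights over the K candidates.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, num_mixtures),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        weights = self.attn(x)                      # (N, K) attention weights
        gamma = weights @ self.gamma                # (N, C) input-dependent scale
        beta = weights @ self.beta                  # (N, C) input-dependent shift
        x = self.norm(x)
        return x * gamma[:, :, None, None] + beta[:, :, None, None]

layer = AttentiveNorm2d(channels=64)
out = layer(torch.randn(8, 64, 32, 32))             # same shape as the input
```

Consistent with the article's claim of "negligible extra computational power," the only additional cost in a sketch like this is the small attention head and the K scale/shift vectors.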
Teaching physics to neural networks removes 'chaos blindness'
Researchers from North Carolina State University have discovered that teaching physics to neural networks enables those networks to better adapt to chaos within their environment. The work has implications for improved artificial intelligence (AI) applications ranging from medical diagnostics to automated drone piloting. Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks mimic this behavior by adjusting numerical weights and biases during training sessions to minimize the difference between their actual and desired outputs.
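The training behavior described in the last sentence can be shown in a few lines. The sketch below is a generic illustration of weights and biases being adjusted to minimize the gap between actual and desired outputs; it is not the physics-informed networks from the study, and the toy data and hyperparameters are made up.

```python
# Generic illustration: a tiny network's weights and biases are adjusted by
# gradient descent so its actual outputs move toward the desired outputs.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(net.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

inputs = torch.randn(128, 2)
targets = (inputs[:, :1] * inputs[:, 1:]).detach()   # toy "desired outputs"

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(net(inputs), targets)   # difference: actual vs. desired
    loss.backward()                        # gradients w.r.t. weights and biases
    optimizer.step()                       # adjust the numerical weights/biases
```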
Researchers incorporate computer vision and uncertainty into AI for robotic prosthetics
Researchers have developed new software that can be integrated with existing hardware to enable people using robotic prosthetics or exoskeletons to walk in a safer, more natural manner on different types of terrain. The new framework incorporates computer vision into prosthetic leg control, and includes robust artificial intelligence (AI) algorithms that allow the software to better account for uncertainty. "Lower-limb robotic prosthetics need to execute different behaviors based on the terrain users are walking on," says Edgar Lobaton, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. "The framework we've created allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making." The researchers focused on distinguishing between six different terrains that require adjustments in a robotic prosthetic's behavior: tile, brick, concrete, grass, "upstairs" and "downstairs."
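As a rough illustration of the decision logic described above, the sketch below classifies the upcoming terrain, quantifies the uncertainty of that prediction, and falls back to a conservative behavior when the prediction is too uncertain to act on. This is a hypothetical simplification, not the authors' framework; the function names, entropy threshold, and example probabilities are assumptions.

```python
# Hypothetical sketch: pick a terrain-specific prosthetic behavior only when the
# terrain prediction is confident enough; otherwise use a safe default gait.
import numpy as np

TERRAINS = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]

def predictive_entropy(probs):
    """Shannon entropy of the class probabilities, used here as uncertainty."""
    return float(-np.sum(probs * np.log(probs + 1e-12)))

def choose_gait(probs, max_entropy=1.0):
    """Pick a terrain-specific gait, or a safe default if uncertainty is high."""
    if predictive_entropy(probs) > max_entropy:
        return "conservative_default_gait"
    return f"gait_for_{TERRAINS[int(np.argmax(probs))]}"

# Example: a confident "grass" prediction vs. an ambiguous one.
print(choose_gait(np.array([0.02, 0.02, 0.06, 0.85, 0.03, 0.02])))  # grass gait
print(choose_gait(np.array([0.20, 0.18, 0.17, 0.17, 0.14, 0.14])))  # safe default
```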
Talking about how we talk about the ethics of artificial intelligence
If you want to understand how people are thinking (and feeling) about new technologies, it's important to understand how media outlets are thinking (and writing) about new technologies. A recent analysis of how journalists have dealt with the ethics of artificial intelligence (AI) suggests that reporters are doing a good job of grappling with a complex set of questions--but there's room for improvement. To learn more about the work, why they did it, and why it's important, we talked to the researchers who did the work: Veljko Dubljević, corresponding author of the paper and an assistant professor of philosophy at NC State; Leila Ouchchy, first author of the paper and a former undergraduate student at NC State; and Allen Coin, co-author of the paper and a graduate student at NC State. The paper, "AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media," was published in the journal AI & Society on March 29. The Abstract: This paper focuses, in part, on ethical issues related to AI technologies that people would use in their daily lives.